Citation Metrics for Legal Information Retrieval: Scholars and Practitioners Intertwined?
- Gineke Wiggers, Suzan Verberne, Gerrit-Jan Zwenne
-
- Journal:
- Legal Information Management / Volume 22 / Issue 2 / June 2022
- Published online by Cambridge University Press:
- 15 August 2022, pp. 88-103
- Print publication:
- June 2022
-
- Article
-
This paper, written by Gineke Wiggers, Suzan Verberne and Gerrit-Jan Zwenne, examines citations in legal documents in the context of bibliometric-enhanced legal information retrieval. The authors note that users of legal information retrieval systems wish to see both scholarly and non-scholarly information, and that legal information retrieval systems are developed to serve both scholarly and non-scholarly users. Since the use of citations in building arguments plays an important role in the legal domain, bibliometric information (such as citations) is an instrument to enhance legal information retrieval systems. The paper examines, through literature and data analysis, whether a bibliometric-enhanced ranking for legal information retrieval should consider both scholarly and non-scholarly publications, and whether one ranking could serve both user groups or a distinction needs to be made. The literature analysis suggests that for legal documents there is no strict separation between scholarly and non-scholarly documents: there is no clear mark by which the two groups can be separated, and in as far as a distinction can be made, the literature shows that both scholars and practitioners (non-scholars) use both types. The authors then perform a data analysis to test this finding for legal information retrieval in practice, using citation and usage data from a legal search engine in the Netherlands. They first create a method to classify legal documents as either scholarly or non-scholarly based on criteria found in the literature, and then semi-automatically analyze a set of seed documents, registering by what (type of) documents they are cited. This resulted in a set of 52 cited (seed) documents and 3,086 citing documents. Based on the affiliation of users of the search engine, they analyzed the relation between user group and document type.
The authors’ data analysis confirms the literature analysis and shows many cross-citations between scholarly and non-scholarly documents. In addition, they find that scholarly users often open non-scholarly documents and vice versa. Their results suggest that, for use in legal information retrieval systems, citations in legal documents measure part of a broad scope of impact, or relevance, on the entire legal field. This means that for bibliometric-enhanced ranking in legal information retrieval, both scholarly and non-scholarly documents should be considered. The fact that both scholarly and non-scholarly users disregard the distinction between scholarly and non-scholarly publications also suggests that the affiliation of the user is not a suitable factor on which to differentiate rankings. The data, in combination with the literature, suggests that a differentiation on user intent might be more suitable.
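The classify-then-count step described in this summary can be sketched as follows. This is an illustration only, not the authors' pipeline: the citation records and the `cross_citation_share` helper are invented, and the real study classified 52 seed documents and 3,086 citing documents semi-automatically.

```python
from collections import Counter

# Hypothetical records: (citing_doc_type, cited_doc_type), where each
# document has already been classified as "scholarly" or "non-scholarly".
citations = [
    ("scholarly", "scholarly"),
    ("scholarly", "non-scholarly"),
    ("non-scholarly", "scholarly"),
    ("non-scholarly", "scholarly"),
    ("non-scholarly", "non-scholarly"),
]

def cross_citation_share(pairs):
    # Fraction of citations that cross the scholarly/non-scholarly boundary.
    counts = Counter(pairs)
    total = sum(counts.values())
    crossing = sum(n for (src, dst), n in counts.items() if src != dst)
    return crossing / total

print(cross_citation_share(citations))  # 3 of 5 citations cross -> 0.6
```

A high share of boundary-crossing citations, as the paper reports, is what argues for ranking both document types together.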
Exploration of Domain Relevance by Legal Professionals in Information Retrieval Systems
- Gineke Wiggers, Suzan Verberne, Gerrit-Jan Zwenne, Wouter Van Loon
-
- Journal:
- Legal Information Management / Volume 22 / Issue 1 / March 2022
- Published online by Cambridge University Press:
- 27 April 2022, pp. 49-67
- Print publication:
- March 2022
-
- Article
-
This paper, written by Gineke Wiggers, Suzan Verberne, Gerrit-Jan Zwenne and Wouter Van Loon, addresses the concept of ‘relevance’ in relation to legal information retrieval (IR). The authors investigate whether the conceptual framework of relevance in legal IR, as described by Van Opijnen and Santos in their 2017 paper, can be confirmed in practice. The research is conducted with a user questionnaire in which users of a legal IR system chose which of two results they would like to see ranked higher for a query, and were asked to give a reason for their choice. To avoid questions with an obvious answer and to extract as much information as possible about the reasoning process, the search results were chosen to differ on relevance factors from the literature, with one result scoring high on one factor and the other on another. The questionnaire contained eleven pairs of search results. A total of 43 legal professionals participated: 14 legal information specialists, 6 legal scholars and 23 legal practitioners. The results confirmed the existence of domain relevance as described in the theoretical framework of Van Opijnen and Santos. Based on the factors mentioned by the respondents, the authors conclude that document type, recency, level of depth, legal hierarchy, authority, usability and whether a document is annotated are factors of domain relevance that are largely independent of the task context. The authors also investigated whether different sub-groups of users of legal IR systems (legal information specialists searching on behalf of others, legal scholars and legal practitioners) differ in the factors they consider when judging the relevance of legal documents outside of a task context. Using a PERMANOVA, they found no significant difference in the factors reported by these groups.
At this moment, there is therefore no reason to treat these sub-groups differently in legal IR systems.
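The PERMANOVA mentioned above tests whether groups differ by permuting group labels over a distance matrix. As an illustration of the idea only (not the authors' analysis or data), a minimal PERMANOVA-style permutation test over squared Euclidean distances can be sketched in plain Python; the points, labels and helper names are invented:

```python
import itertools
import random

def pseudo_f(points, labels):
    # PERMANOVA pseudo-F from pairwise squared Euclidean distances:
    # between-group variation relative to within-group variation.
    n = len(points)
    groups = set(labels)
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    ss_total = sum(sq_dist(points[i], points[j])
                   for i, j in itertools.combinations(range(n), 2)) / n
    ss_within = 0.0
    for g in groups:
        idx = [i for i, lab in enumerate(labels) if lab == g]
        ss_within += sum(sq_dist(points[i], points[j])
                         for i, j in itertools.combinations(idx, 2)) / len(idx)
    ss_between = ss_total - ss_within
    a = len(groups)
    return (ss_between / (a - 1)) / (ss_within / (n - a))

def permanova_p(points, labels, n_perm=999, seed=0):
    # Permutation p-value: shuffle labels, count pseudo-F values
    # at least as extreme as the observed one.
    rng = random.Random(seed)
    f_obs = pseudo_f(points, labels)
    hits = sum(1 for _ in range(n_perm)
               if pseudo_f(points, rng.sample(labels, len(labels))) >= f_obs)
    return (hits + 1) / (n_perm + 1)
```

In the study, each respondent would correspond to a point (e.g. a vector of which relevance factors they mentioned) and each label to a sub-group; a large p-value is what "no significant difference" means here.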
Query-based summarization of discussion threads
- Suzan Verberne, Emiel Krahmer, Sander Wubben, Antal van den Bosch
-
- Journal:
- Natural Language Engineering / Volume 26 / Issue 1 / January 2020
- Published online by Cambridge University Press:
- 16 April 2019, pp. 3-29
-
- Article
-
- Open access
-
In this paper, we address query-based summarization of discussion threads. New users can profit from the information shared in the forum if they can retrieve previously posted information. However, discussion threads on a single topic can easily comprise dozens or hundreds of individual posts. Our aim is to summarize forum threads given real web search queries. We created a data set with search queries from a discussion forum’s search engine log and the discussion threads that were clicked by the user who entered the query. For 120 thread–query combinations, a reference summary was made by five different human raters. We compared two methods for automatic summarization of the threads: a query-independent method based on post features, and Maximum Marginal Relevance (MMR), a method that takes the query into account. We also compared four different word embedding representations as alternatives to standard word vectors in extractive summarization. We find (1) that the agreement between human summarizers does not improve when a query is provided; (2) that the query-independent post features as well as a centroid-based baseline outperform MMR by a large margin; (3) that combining the post features with query similarity gives a small improvement over the use of post features alone; and (4) that for the word embeddings, a match in domain appears to be more important than corpus size and dimensionality. However, the differences between the models were not reflected in the quality of the summaries created with the help of these models. We conclude that query-based summarization with web queries is challenging because the queries are short, and a click on a result is not a direct indicator of the relevance of the result.
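For readers unfamiliar with MMR, the greedy trade-off it makes between query relevance and redundancy can be sketched as follows. This is a generic illustration, not the paper's implementation: the example posts, the query, the bag-of-words cosine similarity and the λ = 0.7 setting are all invented for the sketch (the paper uses richer representations, including word embeddings).

```python
import math
from collections import Counter

def cosine(a, b):
    # Bag-of-words cosine similarity between two token lists.
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def mmr_summarize(posts, query, k=2, lam=0.7):
    # Greedy MMR: at each step pick the post that maximizes
    # lam * sim(post, query) - (1 - lam) * max sim(post, already selected).
    tokenized = [p.lower().split() for p in posts]
    q = query.lower().split()
    selected = []
    candidates = list(range(len(posts)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine(tokenized[i], q)
            redundancy = max((cosine(tokenized[i], tokenized[j])
                              for j in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return [posts[i] for i in selected]
```

With a thread like `["reset the router by holding the button", "holding the reset button restarts the router", "my cat likes the router"]` and the query `"reset router"`, MMR picks the first post and then skips the near-duplicate second post in favour of the third, illustrating the redundancy penalty. That penalty depends on the query being informative, which hints at why short web queries make this hard.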